Abstract: Deep Reinforcement Learning (DRL) has shown promise for voltage control in power systems due to its speed and model-free nature. However, learning optimal control policies through trial and error on a real grid is infeasible given the mission-critical nature of power systems. Instead, DRL agents are typically trained on a simulator, which may not accurately represent the real grid. This discrepancy can lead to suboptimal control policies and raises concerns for power system operators. In this paper, we revisit the problem of RL-based voltage control and investigate how model inaccuracies affect the performance of the DRL agent. Extensive numerical experiments are conducted to quantify the impact of model inaccuracies on learning outcomes. Specifically, we focus on techniques that enable the DRL agent to learn robust policies that still perform well in the presence of model errors. Furthermore, the impact of the agent's decisions on the overall system loss is analyzed to provide additional insight into the control problem. This work aims to address the concerns of power system operators and make DRL-based voltage control more practical and reliable.
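Since only the abstract is available here, the following is a minimal, purely illustrative sketch of the core idea it describes: training a control policy on randomized simulator models (domain randomization) so it remains robust when the real grid's parameters differ from the nominal model. The toy single-bus environment, the linear feedback policy, the cross-entropy search, and all parameter values are assumptions for illustration, not the paper's actual environment or algorithm.

```python
# Illustrative sketch only: domain randomization for robust voltage control.
# The environment, policy class, and training method are assumptions, not the
# paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

def rollout(w, x_react):
    """Run one episode on a toy single-bus feeder.

    Linearized model: bus voltage v = v0 + x_react * q, where q is the
    reactive-power setpoint chosen by the policy q = w * (1.0 - v).
    Returns the negative cumulative squared voltage deviation (higher is better).
    """
    v0 = rng.uniform(0.93, 1.07)   # random load-driven voltage offset (pu)
    v, cost = v0, 0.0
    for _ in range(20):
        q = w * (1.0 - v)          # linear feedback policy
        v = v0 + x_react * q       # simulator step under the sampled model
        cost += (v - 1.0) ** 2
    return -cost

def train(randomize_model, iters=50, pop=64, elite=8):
    """Cross-entropy search over the scalar policy gain w.

    With randomize_model=True, each rollout samples a different reactance,
    so surviving gains must perform well across many plausible grid models:
    a stand-in for robustness to simulator-vs-real-grid mismatch.
    """
    mu, sigma = 0.0, 2.0
    for _ in range(iters):
        ws = rng.normal(mu, sigma, size=pop)
        xs = rng.uniform(0.2, 0.8, size=pop) if randomize_model else np.full(pop, 0.5)
        scores = np.array([rollout(w, x) for w, x in zip(ws, xs)])
        best = ws[np.argsort(scores)[-elite:]]   # keep the elite gains
        mu, sigma = best.mean(), best.std() + 1e-3
    return mu

w_nominal = train(randomize_model=False)   # trained on one (possibly wrong) model
w_robust = train(randomize_model=True)     # trained across randomized models

# Evaluate both on a "real" grid whose reactance differs from the nominal model.
for name, w in [("nominal", w_nominal), ("robust", w_robust)]:
    score = np.mean([rollout(w, x_react=0.75) for _ in range(200)])
    print(f"{name}: gain={w:.2f}, avg return on mismatched grid={score:.3f}")
```

The intended takeaway mirrors the abstract's argument: the gain tuned against a single nominal model can become unstable when the true reactance is larger than assumed, while the gain trained across randomized models trades a little nominal performance for reliable behavior under model error.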